An Efficient Adversarial Learning Strategy for Constructing Robust Classification Boundaries
Authors
Abstract
Traditional classification methods assume that the training and test data arise from the same underlying distribution. However, in some adversarial settings the test set can be deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to evade word-based features embedded in a spam filter. Recent research has modeled the interaction between a data miner and an adversary as a sequential Stackelberg game and solved for its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. However, in this paper we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary; from that perspective, a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in the recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibria.
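The following is a minimal sketch of the one-step idea described above, not the authors' actual formulation: the adversary is simulated by damping the leading singular directions of the training matrix, and a single classifier is then fit on the original data augmented with the transformed copies, so only one optimization problem is solved. The function simulate_adversary and the knobs damping and k are illustrative assumptions, and the logistic-regression training step stands in for the paper's novel optimization problem.

    # Illustrative sketch only; simulate_adversary, damping, and k are assumptions,
    # not quantities defined in the paper.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def simulate_adversary(X, damping=0.5, k=2):
        """Shrink the top-k singular values of X to mimic adversarial feature manipulation."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_adv = s.copy()
        s_adv[:k] *= damping              # attenuate the most informative directions
        return (U * s_adv) @ Vt           # reconstruct the "manipulated" data

    # Toy data: 200 samples, 20 features, label driven by the first two features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    X_adv = simulate_adversary(X)

    # One-step training on original plus simulated adversarial data:
    # a single convex problem is solved, in contrast to the iterative Stackelberg play.
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
    print("training accuracy on clean data:", clf.score(X, y))

The design point of the sketch is only that the adversary's manipulation is encoded as a transformation of the singular vectors of the data matrix, after which robustness is obtained from a single fit rather than repeated best-response optimization.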
Similar resources
Open Category Classification by Adversarial Sample Generation
In real-world classification tasks, it is difficult to collect training samples from all possible categories of the environment. Therefore, when an instance of an unseen class appears in the prediction stage, a robust classifier should be able to tell that it is from an unseen class, instead of classifying it to be any known category. In this paper, adopting the idea of adversarial learning, we...
Robust Opponent Modeling in Real-Time Strategy Games using Bayesian Networks
Opponent modeling is a key challenge in Real-Time Strategy (RTS) games as the environment is adversarial in these games, and the player cannot predict the future actions of her opponent. Additionally, the environment is partially observable due to the fog of war. In this paper, we propose an opponent model which is robust to the observation noise existing due to the fog of war. In order to cope...
The Space of Transferable Adversarial Examples
Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test-time. Adversarial examples are known to transfer across models: the same perturbed input is often misclassified by different models despite being generated to mislead a specific architecture. This phenomenon enables simple yet powerful black-box attacks against deployed ML systems. In th...
Multiple classifier systems for robust classifier design in adversarial environments
Pattern recognition systems are increasingly being used in adversarial environments like network intrusion detection, spam filtering, and biometric authentication and verification, in which an adversary may adaptively manipulate data to make a classifier ineffective. Current theory and design methods of pattern recognition systems do not take into account the adversarial nature of such k...
Towards Robust Deep Neural Networks with BANG
Machine learning models, including state-of-the-art deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. This unexpected lack of robustness raises fundamental questions about their generalization properties and poses a serious concern for practical deployments. As such perturbations can remain imperceptible – commonly called adversarial exampl...